Search results for "Multimodal data"
Showing 5 of 5 documents
Multimodal data as a means to understand the learning experience
2019
Most work in the design of learning technology uses click-streams as its primary data source for modelling and predicting learning behaviour. In this paper we set out to quantify what advantages, if any, physiological sensing techniques provide for the design of learning technologies. We conducted a lab study with 251 game sessions and 17 users focusing on skill development (i.e., the user's ability to master complex tasks). We collected click-stream data, as well as eye-tracking, electroencephalography (EEG), video, and wristband data during the experiment. Our analysis shows that traditional click-stream models achieve a 39% error rate in predicting learning performance (and 18% when we perf…
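As a rough illustration of the comparison this abstract describes, the sketch below trains one model on click-stream features alone and one on click-stream plus physiological features; the synthetic data, feature names, and random-forest regressor are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch: compare a click-stream-only model with one that also uses
# physiological features (eye-tracking, EEG, wristband). The synthetic data,
# feature names, and random-forest model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sessions = 251                                 # matches the study's session count
clickstream = rng.normal(size=(n_sessions, 3))   # e.g. clicks/min, task switches, idle time
physio = rng.normal(size=(n_sessions, 3))        # e.g. pupil dilation, EEG band power, EDA peaks
performance = (clickstream @ [0.2, -0.1, 0.05]
               + physio @ [0.5, 0.3, 0.1]
               + rng.normal(scale=0.2, size=n_sessions))

def mean_abs_error(features):
    """Cross-validated mean absolute error for a given feature matrix."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, features, performance,
                             scoring="neg_mean_absolute_error", cv=5)
    return -scores.mean()

print("click-stream only:        ", mean_abs_error(clickstream))
print("click-stream + physiology:", mean_abs_error(np.hstack([clickstream, physio])))
```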
BIG-AFF
2017
Recent research has provided solid evidence that emotions strongly affect motivation and engagement, and hence play an important role in learning. In the BIG-AFF project, we build on the hypothesis that "it is possible to provide learners with personalised support that enriches their learning process and experience by using low-intrusive (and low-cost) devices to capture affective multimodal data that include cognitive, behavioural and physiological information". For dealing with the complete affect management cycle, covering affect detection, modelling and feedback, there is a lack of standards and consolidated methodologies. Since our goal is to develop realistic affect-aware learnin…
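The affect detection, modelling and feedback cycle mentioned above could be organised along the following lines; every function, value and threshold in this sketch is a hypothetical placeholder rather than anything from the BIG-AFF project.

```python
# Rough sketch of an affect-management loop (detection -> modelling -> feedback)
# driven by low-intrusive sensors. All names and rules are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AffectObservation:
    cognitive: float      # e.g. task correctness signal
    behavioural: float    # e.g. typing/mouse dynamics
    physiological: float  # e.g. heart-rate variability from a wristband

def detect(raw_window) -> AffectObservation:
    """Extract multimodal affect indicators from a window of raw sensor data."""
    return AffectObservation(*raw_window)          # placeholder extraction

def update_model(state: dict, obs: AffectObservation) -> dict:
    """Keep a simple running estimate of the learner's affective state."""
    alpha = 0.2
    for key, value in vars(obs).items():
        state[key] = (1 - alpha) * state.get(key, value) + alpha * value
    return state

def give_feedback(state: dict) -> str:
    """Pick a feedback action from the modelled state (toy rule)."""
    return "suggest a break" if state["physiological"] > 0.8 else "continue"

state: dict = {}
for window in [(0.9, 0.4, 0.85), (0.7, 0.5, 0.6)]:   # fake sensor windows
    state = update_model(state, detect(window))
    print(give_feedback(state))
```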
Fusing in vivo and ex vivo NMR sources of information for brain tumor classification
2011
In this study we classify short echo-time brain magnetic resonance spectroscopic imaging (MRSI) data by applying a model-based canonical correlation analysis algorithm and by using, as prior knowledge, multimodal sources of information coming from high-resolution magic angle spinning (HR-MAS), MRSI and magnetic resonance imaging. The potential and limitations of fusing in vivo and ex vivo nuclear magnetic resonance sources to detect brain tumors are investigated. We present various modalities for multimodal data fusion, study the effect and impact of using multimodal information for classifying MRSI brain glial tumor data, and analyze which parameters influence the classification results…
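To make the fusion idea concrete, here is a minimal canonical correlation analysis sketch with scikit-learn that projects two modalities into a shared space and classifies there; the synthetic arrays and logistic-regression classifier are assumptions, not the model-based algorithm or prior-knowledge scheme used in the study.

```python
# Minimal sketch: fuse two modalities (e.g. MRSI spectra and imaging features)
# with canonical correlation analysis, then classify in the shared space.
# The synthetic data and logistic-regression classifier are assumptions only.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 200
X_mrsi = rng.normal(size=(n_voxels, 50))   # e.g. short echo-time spectral features
X_mri = rng.normal(size=(n_voxels, 10))    # e.g. co-registered imaging features
y = rng.integers(0, 2, size=n_voxels)      # tumor vs. normal labels (synthetic)

cca = CCA(n_components=5)
X_c, Y_c = cca.fit_transform(X_mrsi, X_mri)   # project both views to canonical space
fused = np.hstack([X_c, Y_c])                 # simple concatenation fusion

clf = LogisticRegression(max_iter=1000).fit(fused, y)
print("training accuracy:", clf.score(fused, y))
```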
The relationship between electrophysiological and hemodynamic measures of neural activity varies across picture naming tasks: A multimodal magnetoenc…
2022
Different neuroimaging methods can yield different views of task-dependent neural engagement. Studies examining the relat…
The focus and timing of gaze matters: Investigating collaborative knowledge construction in a simulation-based environment by combined video and eye…
2022
Although eye tracking has been successfully used in science education research, exploiting its potential in collaborative knowledge construction has remained sporadic. This article presents a novel approach for studying collaborative knowledge construction in a simulation-based environment by combining both the spatial and temporal dimensions of eye-tracking data with video data. For this purpose, we have investigated two undergraduate physics student pairs solving an electrostatics problem in a simulation-based environment via Zoom. The analysis of the video data of the students’ conversations focused on the different collaborative knowledge construction levels (new idea, explication, eval…
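A simple way to combine the temporal dimension of gaze data with video-coded conversation episodes, in the spirit of the approach above, is a timestamp-based join; the column names and coding labels below are illustrative assumptions, not the study's actual scheme.

```python
# Minimal sketch: align gaze fixations (with areas of interest) to video-coded
# knowledge-construction episodes by timestamp. Column names and coding labels
# are illustrative assumptions.
import pandas as pd

fixations = pd.DataFrame({
    "time_s": [12.1, 13.4, 15.0, 18.2],
    "aoi": ["simulation", "chat", "simulation", "simulation"],  # area of interest
    "student": ["A", "B", "A", "B"],
})
episodes = pd.DataFrame({
    "start_s": [10.0, 14.0, 17.5],
    "label": ["new idea", "explication", "other"],  # placeholder coding labels
})

# Assign each fixation to the most recent episode that started before it.
aligned = pd.merge_asof(fixations.sort_values("time_s"),
                        episodes.sort_values("start_s"),
                        left_on="time_s", right_on="start_s")

# Where does gaze land during each knowledge-construction level?
print(aligned.groupby(["label", "aoi"]).size())
```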